AI agents: When artificial intelligence organizes your life
17 Sept 2025
LMU philosopher Benjamin Lange on the ethical challenges of new AI tools that do things for you like a personal assistant.
Benjamin Lange researches ethical issues surrounding artificial intelligence — from responsibility and autonomy to justice. | © Eleanor Großhenning / Sons of Motion Pictures GmbH
When machines don’t just provide answers but can also plan and even take action on their own, what does that mean for the people using them? Philosopher Dr. Benjamin Lange, Junior Research Group Leader at the Professorship of Ethics of Artificial Intelligence and the Munich Center for Machine Learning, is studying the opportunities and threats of the new AI agents.
What exactly are AI agents — and how do they differ from previous AI programs?
Benjamin Lange: AI agents are highly capable systems that are essentially based on large language models like ChatGPT. However, while the latter merely execute your commands, AI agents attempt to recognize your intentions and come up with their own suggestions on the way to achieving the objective. They also have interfaces to external tools — calendars, email programs, databases, and even bank accounts.
Nevertheless, calling them ‘agents’ is misleading. It suggests a degree of autonomy or even intentionality that these systems do not possess. Yes, AI agents can plan and even implement sequences of actions independently, but only if users give them the command to do so. They might be better described as a genie in a bottle: While classic large language models are like oracles you consult, these new systems are designed to be like a helpful genie that takes your requests, breaks them down into individual steps, and ultimately presents you with the result, without your having to confirm each step it takes along the way.
Can you give us an example?
Sure: Planning your next vacation. Instead of having to give step-by-step instructions, all you need to do is say something like, “Please book me a trip to Spain.” The agent will research options, book the flight, enter the details in your calendar, send the confirmation email, and might even handle the payment. That said, few users trust such systems with tasks like this to date.
How else can these assistants be used?
In your everyday life, they can act like your own personal executive assistant, coordinating your appointments and paying your bills — doing all the things that otherwise take up so much time. In the workplace, they can ease your team’s workload, take on routine tasks, and provide creative input that you would otherwise have to generate laboriously in workshops. And finally, in education, agents could act as customized tutors, able to remember where you were having difficulties and prepare learning materials for you individually. This would be an enormous opportunity for personalized education, which traditional teaching formats cannot really provide.
Are these agents already part of our everyday life, or are they still science fiction?
They are on the verge. At the start of the year, many prototypes were still failing at mundane tasks like opening ZIP files or synchronizing calendars. Today, there are applications that can already reliably compose emails and organize appointments. However, they’re not yet at the point where they’re fully and smoothly integrated into our everyday life. The idea that we could simply get up in the morning and say “Do my to-dos for the week” and everything would happen as if by magic is still science fiction. But their development is progressing rapidly, and it remains to be seen how society will embrace this technology.
What ethical questions do these new possibilities raise?
On the one hand, there’s the question of autonomy: If systems are filtering options and providing plausible-seeming suggestions, are we as users still making our own decisions? Then there’s the question of responsibility: Who will bear the consequences when an assistant makes a mistake — the user, the developer, or the company? The risk of misuse is also high. The same technology that makes your everyday life easier could just as easily be used by trolls to spread disinformation — fast, scalable, and on a massive scale.
Emotional attachment presents another challenge. Voice-recognition systems easily create the impression that you can build a relationship with them. Vulnerable groups — such as children, the elderly, or lonely people — could become dependent in ways that the system cannot reciprocate. And finally, there are questions of fairness: People who are able to use such agents and can afford them financially will have advantages that see others left behind. This could exacerbate existing social inequalities.
How can we address these challenges?
Certainly not with algorithms alone. Our society needs fundamental standards, transparency, and clear responsibilities for what AI does — like we have with the TÜV (German Technical Inspection Agency). These rules must not be defined by the tech companies alone. Rather, society must negotiate which systems we want — and which we don’t. Ethics cannot be allowed to remain theoretical; it must be put into practice — in the design of the programs, in their application, and in their regulation.
What could regulation actually look like?
We need a multidimensional approach. Policymakers must create binding frameworks — say, on data privacy or liability issues — and companies must take responsibility and disclose potential risks. And the public should debate how much space we want to give agents in our lives. The EU’s new AI Act, which aims to guarantee the EU’s security, fundamental rights, and values while also promoting innovation, is a start. But it concerns AI in general and isn’t yet tailored to agents. This just shows how hard it is for regulation to keep up with the pace of technological change.
Do we need to invent new ethical concepts in order to evaluate these technologies?
No, we already have strong concepts: responsibility, freedom, justice — all highly relevant in this context. The question is how we apply them in concrete terms. Practical ethics means translating these concepts into guidelines, standards, and policy measures. That’s when philosophical debates can guide our actions — and that is exactly what we need.
From your perspective as a philosopher, are there any classical thinkers who can serve as a point of orientation when it comes to AI agents?
There are, in fact, many who are relevant. Aristotle, for example, discussed virtues and being good — an issue that arises when machines prepare our actions for us. David Hume emphasized the role of habits and emotions — which is exactly what agents focus on with their personalization. And Kant defined autonomy as self-legislation — but this becomes fragile when systems pre-structure decisions for us. Foucault, meanwhile, taught us to critically question power structures. All of these perspectives can help us to reflect anew on how AI is influencing our actions.
As AI agents take on more and more tasks, are we losing autonomy or gaining freedom?
Possibly both. We’re gaining freedom when they take routine tasks off our hands. At the same time, we risk simply rubber-stamping decisions and thus gradually losing our autonomy. In a recent essay, I call this the question “Who speaks first?” If, someday in the future, we say, “My AI agent suggested it, and I agreed,” then it’s questionable whether we’re still consciously participating in the decision-making process — or whether we’re just confirming what sounds plausible. And striking this balance is precisely what we’ll need to navigate.
A panel discussion on “The Ethics of AI Agents” will take place on Thursday, 25 September, from 5:30 p.m. to 9 p.m., co-organized by Ben Lange and Johannes Reisinger.
The discussion participants will be: Gitta Kutyniok, holder of the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at LMU, who focuses on explainable and reliable AI systems; Barbara Plank, Professor of AI and Computational Linguistics at LMU, whose research interests lie in robust, domain-adaptive language processing with AI; and Lena Kästner, Professor of Philosophy, Computer Science, and Artificial Intelligence at the University of Bayreuth, who studies the philosophy of AI and cognitive science. The event will be moderated by journalist and television presenter Johannes Büchs, known from Die Sendung mit der Maus.
The event is part of a series called “Comparing Notes on Trustworthy AI” organized by the Applied AI Institute for Europe in cooperation with LMU, the Munich Center for Machine Learning (MCML), the Liquid Legal Institute e.V. and the TUM Think Tank. It is aimed at researchers, students, and anyone interested in ethical and social issues surrounding AI agents.
The panel discussion will take place at the headquarters of the Carl Friedrich von Siemens Stiftung, Südliches Schlossrondell 23, 80638 Munich.
Registration is required.